You have a set of examples,
and this set of examples actually says:
in this situation, you should do that.
Okay. So, very simply put,
that's what we call inductive learning:
we have a set of examples
of a function f from states to outcomes,
which is the ground truth,
the gold standard we want to learn.
And so, what we want to do is,
we want to find a hypothesis,
a function h that behaves similarly to,
or ideally exactly like, f
on the examples we have already seen,
and that has good prediction quality
for future unseen examples.
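To make this concrete, here is a minimal sketch in Python; the example data, the tiny hypothesis space, and the error measure are all invented for illustration, not taken from the lecture.

```python
# Examples: pairs (x, f(x)) sampled from the unknown ground-truth function f.
examples = [(0, 0.1), (1, 1.9), (2, 4.2), (3, 5.8)]

# A tiny hypothesis space H: a few candidate functions from states to outcomes.
hypothesis_space = {
    "h1: y = x":     lambda x: x,
    "h2: y = 2x":    lambda x: 2 * x,
    "h3: y = x + 1": lambda x: x + 1,
}

def training_error(h, examples):
    # Sum of squared disagreements between h and the examples we have seen.
    return sum((h(x) - y) ** 2 for x, y in examples)

# Inductive learning: pick the hypothesis that behaves most like f on the
# seen examples, hoping it also predicts well on future, unseen examples.
best_name, best_h = min(hypothesis_space.items(),
                        key=lambda item: training_error(item[1], examples))
print(best_name, training_error(best_h, examples))
```

The point of the sketch is only the shape of the problem: f itself is hidden, we only see the example pairs, and learning means searching a hypothesis space for the function that disagrees least with them.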
The elephant in the room here is that
the hypothesis has to come from
some kind of hypothesis space.
And this hypothesis space is something
that, kind of in the background,
determines a lot of things.
We've looked at these examples of curve fitting,
where we've looked at different hypothesis spaces,
linear polynomials,
quadratic polynomials,
order four polynomials,
order gazillion polynomials,
or something like that.
And you can see that which
hypotheses are best
depends on the hypothesis space.
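As a rough sketch of that effect (the five data points are invented; numpy's polyfit/polyval do the least-squares fitting in each polynomial space):

```python
import numpy as np

# Five invented data points, just to illustrate curve fitting.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.2, 0.9, 4.3, 9.1, 15.8])

# Same examples, different hypothesis spaces: polynomials of increasing degree.
for degree in (1, 2, 4):
    coeffs = np.polyfit(xs, ys, degree)       # least-squares fit in this space
    residual = np.sum((np.polyval(coeffs, xs) - ys) ** 2)
    print(f"degree {degree}: training error {residual:.4f}")

# A degree-4 polynomial can hit all five points exactly (training error ~0),
# but that alone says nothing about prediction quality on unseen x values.
```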
Okay. Sometimes, we have
consistent hypotheses,
which means they agree with all of the examples.
Sometimes, we have non-consistent or
partially consistent hypotheses,
all depending on which hypothesis space
you're allowed to pick from.
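As a small sketch of what "consistent" means operationally (the tolerance parameter is my addition, to cope with floating-point noise):

```python
def is_consistent(h, examples, tol=1e-9):
    # A hypothesis h is consistent if it agrees with all of the examples.
    return all(abs(h(x) - y) <= tol for x, y in examples)

examples = [(0, 0), (1, 2), (2, 4)]
print(is_consistent(lambda x: 2 * x, examples))  # True: agrees everywhere
print(is_consistent(lambda x: x + 1, examples))  # False: wrong at x = 0 and x = 2
```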
Yes? Do you assume that the training set
does not contain all values?
Or does it really matter? Because if
you're trying to learn this training set
and it contains wrong values,
you might learn something wrong.
Yes. Right. If we have wrong examples,
we have to deal with that.
Okay. So, we might really say,
we need to have outlier detection.
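The clip ends here. As a hedged sketch of what such an outlier-detection step could look like (the fit-then-filter scheme and the threshold k are my assumptions, not the lecture's recipe):

```python
import numpy as np

def drop_outliers(xs, ys, degree=1, k=2.0):
    # Crude outlier filter (illustrative only): fit once, then discard
    # examples whose residual exceeds k standard deviations.
    coeffs = np.polyfit(xs, ys, degree)
    residuals = ys - np.polyval(coeffs, xs)
    keep = np.abs(residuals) <= k * np.std(residuals)
    return xs[keep], ys[keep]

# One wrong example (the 9.5) among otherwise clean data; filter, then refit.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ys = np.array([0.1, 1.1, 2.0, 9.5, 4.1, 5.0])
clean_xs, clean_ys = drop_outliers(xs, ys)
print(np.polyfit(clean_xs, clean_ys, 1))   # roughly y = x again
```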
Recap: Inductive Learning
Main video on the topic in chapter 8, clip 3.